Linear Tensor Networks
At a Glance
- Linear tensor networks represent high-dimensional tensors as chains of smaller tensors connected by contractions.
- Matrix Product State (MPS) represents state vectors and wavefunctions in 1D quantum systems.
- Matrix Product Operator (MPO) represents operators and Hamiltonians in 1D quantum systems.
- Many interesting observables and Hamiltonians have an exact MPO representation.
- The bond dimension controls the compression ratio and approximation quality.
- Great tools for efficient simulation of low-dimensional quantum many-body systems.
Overview
A tensor network is a structured representation of high-dimensional tensors as chains of lower-order tensors. In 1D systems, this takes the form of either a Matrix Product State (MPS) for state vectors or a Matrix Product Operator (MPO) for operators.
Rather than storing a tensor with $d^N$ entries directly, a linear tensor network decomposes it into smaller tensors connected by "bond" indices. This chain structure enables:
- Compression: Approximations with reduced bond dimension dramatically lower memory usage. With bond dimension $\chi$, the memory requirement becomes $O(N d \chi^2)$ instead of $d^N$.
- Efficient algorithms: Operations like variational optimization become tractable.
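The memory scaling above can be made concrete with a small count of stored amplitudes. The sketch below is plain Python, independent of Aleph; the only assumption is a uniform cap `chi` on the bond dimensions, with exact (uncapped) bond dimensions near the edges:

```python
# Memory scaling: full state vector vs. MPS, for N sites of physical
# dimension d and maximum bond dimension chi. Counts stored amplitudes.

def full_size(N, d):
    # a dense state vector needs d**N amplitudes
    return d ** N

def mps_size(N, d, chi):
    total = 0
    for i in range(N):
        # exact bond dimensions grow from the edges as d**k, capped at chi
        left = min(d ** i, d ** (N - i), chi)
        right = min(d ** (i + 1), d ** (N - i - 1), chi)
        total += left * d * right  # one (left, d, right) site tensor
    return total

N, d, chi = 30, 2, 16
print(full_size(N, d))    # 2**30 = 1073741824 amplitudes
print(mps_size(N, d, chi))
```

For 30 spin-1/2 sites at `chi = 16`, the MPS stores on the order of $10^4$ numbers instead of the $2^{30} \approx 10^9$ of the full vector.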
For an introduction to tensors and their representation in Aleph, see the Tensor concept page. For compression techniques using factorization, see Factorization.
The Key Concept: Bond Dimension
The structure of a linear tensor network is defined by its bond dimensions. At each connection between neighboring tensors there is a shared bond index of size $\chi_i$. These bond dimensions can vary across the chain:
- Memory usage: With varying bond dimensions $\chi_i$, memory scales as $\sum_i \chi_{i-1} d_i \chi_i$, where $d_i$ is the physical dimension at site $i$. Alternatively, using the maximum bond dimension $\chi = \max_i \chi_i$, memory is roughly $O(N d \chi^2)$, where $N$ is the number of sites.
- Approximation quality: Larger bond dimensions retain more entanglement and yield better approximations
- Computational cost: Operations typically scale polynomially in the maximum bond dimension $\chi$
For exact representations, $\chi_i$ equals the effective matrix rank at bond $i$. For approximations, bond dimensions are truncated to balance accuracy and efficiency. Importantly, optimal representations often have smaller bonds near the boundaries and larger bonds in the center.
Code example:
// Create a random MPS with physical dimension 2 (e.g., spin-1/2)
// and bond dimension 16
var psi = make_random_mps(10, 2, 16, as_complex)
print(psi.size()) // Number of sites
print(psi.max_bond()) // Maximum bond dimension
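The truncation mentioned above is, at its core, a singular value decomposition in which only the $\chi$ largest singular values are kept. A minimal NumPy sketch of this mechanism, independent of Aleph's API:

```python
import numpy as np

# Truncating a bond via SVD: keep only the chi largest singular values.
# For the Frobenius norm, the error equals the norm of the discarded
# singular values (Eckart-Young theorem).

rng = np.random.default_rng(0)
theta = rng.normal(size=(8, 8))   # a two-site block reshaped as a matrix

U, S, Vh = np.linalg.svd(theta, full_matrices=False)

chi = 4                           # target bond dimension
approx = U[:, :chi] @ np.diag(S[:chi]) @ Vh[:chi, :]

err = np.linalg.norm(theta - approx)
print(err, np.sqrt(np.sum(S[chi:] ** 2)))  # the two values agree
```

This is why the singular value spectrum at each bond directly controls how well a given $\chi$ can approximate the state.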
Matrix Product States (MPS)
An MPS decomposes a quantum state vector as a chain of tensors:

$$|\psi\rangle = \sum_{s_1, \ldots, s_N} A^{s_1}_1 A^{s_2}_2 \cdots A^{s_N}_N \, |s_1 s_2 \cdots s_N\rangle$$

where:
- Each $A^{s_i}_i$ is a site tensor with three indices: bond-left $\alpha_{i-1}$, physical $s_i$, bond-right $\alpha_i$
- The physical dimension $d$ is the local Hilbert space dimension (e.g., 2 for qubits)
- The bond dimension $\chi_i$ is the size of the bond index $\alpha_i$, and can vary across the chain
- The boundary tensors have bond dimension 1 on the open edges: $\chi_0 = \chi_N = 1$
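The decomposition can be sketched directly in NumPy by building a small random MPS and contracting it back into the full state vector. This is purely illustrative and independent of Aleph's API; the index order `(bond_left, phys, bond_right)` per site tensor is an assumption of the sketch:

```python
import numpy as np

# A tiny MPS with N=4 sites, physical dimension d=2, bond dimension chi=3,
# contracted back into the full d**N-component state vector.

rng = np.random.default_rng(1)
N, d, chi = 4, 2, 3
dims = [1, chi, chi, chi, 1]   # boundary bonds have dimension 1
sites = [rng.normal(size=(dims[i], d, dims[i + 1])) for i in range(N)]

vec = sites[0]                 # shape (1, d, chi)
for A in sites[1:]:
    # contract the shared bond index, accumulating physical legs
    vec = np.tensordot(vec, A, axes=([-1], [0]))
vec = vec.reshape(d ** N)      # drop the trivial boundary bonds

print(vec.shape)               # (16,)
```

Reversing this contraction (full vector to MPS) is done by repeated SVDs, which is where the bond dimensions arise.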
Canonical Form
An MPS can be put into canonical form with an orthogonality center at position $c$:
- All tensors to the left of $c$ are in left-canonical form (left-orthogonal)
- All tensors to the right of $c$ are in right-canonical form (right-orthogonal)
- The orthogonality center tensor at $c$ carries the full norm of the state
This is useful because:
- Operations at the orthogonality center act on a small effective problem
- Left- and right-orthogonal tensors contract with their conjugates to the identity, simplifying norm and expectation-value computations
- Algorithms can sweep through the chain systematically
Code example:
// Set orthogonality center to position 5
var psi = make_random_mps(10, 2, 16, as_complex)
psi.set_orthogonality_center(5)
// Check if in canonical form
if (psi.is_canonical()) {
print("MPS is in canonical form")
}
// Compute inner product with another MPS
var phi = make_random_mps(10, 2, 16, as_complex)
var overlap = psi.dot(phi)
print(overlap) // Complex number
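The canonical form is produced by sweeping orthogonality through the chain, conventionally with QR (or SVD) decompositions. A minimal NumPy sketch of a left-canonicalization sweep, illustrative only and not Aleph's internal implementation (site layout `(bond_left, phys, bond_right)` is an assumption):

```python
import numpy as np

# Sweep an MPS into left-canonical form, pushing the orthogonality
# center to the last site via successive QR decompositions.

rng = np.random.default_rng(2)
N, d, chi = 4, 2, 3
dims = [1, chi, chi, chi, 1]
sites = [rng.normal(size=(dims[i], d, dims[i + 1])) for i in range(N)]

for i in range(N - 1):
    l, p, r = sites[i].shape
    Q, R = np.linalg.qr(sites[i].reshape(l * p, r))
    sites[i] = Q.reshape(l, p, Q.shape[1])  # now left-orthogonal
    # absorb R into the next site, moving the center one step right
    sites[i + 1] = np.tensordot(R, sites[i + 1], axes=([1], [0]))

# each tensor left of the center satisfies sum_s A^s(dagger) A^s = identity
A = sites[0]
check = np.tensordot(A.conj(), A, axes=([0, 1], [0, 1]))
print(np.allclose(check, np.eye(check.shape[0])))  # True
```

This identity property is exactly why inner products and local expectation values collapse onto the center tensor.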
Creating and Manipulating MPS
// Create a random MPS with 10 sites, physical dimension 2,
// and bond dimension 16
var mps = make_random_mps(10, 2, 16, as_complex)
// Get the tensor at site 3
var A3 = mps.get_site(3)
print(A3.shape()) // [bond_left, physical, bond_right]
// Note: bond dimensions may vary; this shows actual dimensions for site 3
// Check maximum bond dimension across the chain
print(mps.max_bond()) // Returns largest bond dimension used
// Normalize the MPS to unit norm
mps.normalize()
// Access site tensors directly
var first_site = mps[0]
var last_site = mps[-1] // Negative indexing supported
Matrix Product Operators (MPO)
An MPO represents an operator in a similar chain structure:

$$\hat{O} = \sum_{\{s\}, \{s'\}} W^{s_1 s'_1}_1 W^{s_2 s'_2}_2 \cdots W^{s_N s'_N}_N \, |s_1 \cdots s_N\rangle\langle s'_1 \cdots s'_N|$$

where each $W^{s_i s'_i}_i$ is a site tensor with:
- Physical index $s_i$ (out)
- Physical index $s'_i$ (in)
- Bond indices $\beta_{i-1}$ and $\beta_i$ (left and right)
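As an example of the exact MPO representations mentioned at the top of the page, the transverse-field Ising Hamiltonian $H = -J \sum_i \sigma^z_i \sigma^z_{i+1} - h \sum_i \sigma^x_i$ admits an exact bond-dimension-3 MPO. The NumPy sketch below follows the standard textbook construction and is independent of Aleph's API; the index order `(bond_left, out, in, bond_right)` is an assumption:

```python
import numpy as np

# Exact bond-dimension-3 MPO for the transverse-field Ising model.
J, h = 1.0, 0.5
I = np.eye(2)
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])

# bulk tensor: rows/columns index a 3-state "automaton" that emits
# either identity, an open sz bond, or a completed term
W = np.zeros((3, 2, 2, 3))
W[0, :, :, 0] = I
W[0, :, :, 1] = sz
W[0, :, :, 2] = -h * sx
W[1, :, :, 2] = -J * sz
W[2, :, :, 2] = I

N = 6
l = np.array([1.0, 0.0, 0.0])   # boundary vectors pick out the paths
r = np.array([0.0, 0.0, 1.0])   # that produce exactly one term of H
mpo = [np.einsum('a,aijb->ijb', l, W).reshape(1, 2, 2, 3)] \
    + [W] * (N - 2) \
    + [np.einsum('aijb,b->aij', W, r).reshape(3, 2, 2, 1)]

print(len(mpo), mpo[0].shape, mpo[-1].shape)
```

Contracting all bond indices of this chain reproduces the full $2^N \times 2^N$ Hamiltonian matrix exactly, with memory linear rather than exponential in $N$.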
MPOs are commonly used for:
- Hamiltonians in quantum simulations
- Observables for computing expectation values
- Projectors and other many-body operators
- Time-evolution operators in algorithms like TEBD
For constructing Hamiltonians and observables, use the Model class which provides a high-level interface for defining quantum systems. This is more convenient than manually constructing MPO tensors.
Code example:
// Create a length 10 MPO, with local dimension 2 and uniform bond dimension 4
var mpo = make_random_mpo(10, 2, 4)
// Get maximum bond dimension
print(mpo.max_bond())
// Access and set site tensors
var W = mpo.get_site(0)
mpo.set_site(0, W)
// Convert to full matrix representation (expensive for large systems!)
var full_matrix = mpo.matrix()
print(full_matrix.shape()) // [2^10, 2^10] matrix
Contraction and Overlap Computations
The primary use of linear tensor networks is efficient contraction:
MPS-MPS Overlap (Inner Product)
Computing the overlap between two MPS is efficient:

$$\langle \phi | \psi \rangle = \sum_{\{s\}} \left( B^{s_1}_1 \cdots B^{s_N}_N \right)^* A^{s_1}_1 \cdots A^{s_N}_N$$

This can be computed in $O(N d \chi^3)$ time by sweeping through the chain once, where $\chi$ is the maximum bond dimension. This is much better than the naive $O(d^N)$ scaling for the full tensor.
var psi = make_random_mps(10, 10, 16, as_complex)
var phi = make_random_mps(10, 10, 16, as_complex)
psi.set_orthogonality_center(0)
psi.normalize()
phi.set_orthogonality_center(-1)
phi.normalize()
// Efficient inner product computation
var overlap = psi.dot(phi)
print(overlap) // Complex scalar with norm less than or equal to 1.
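The sweep behind this inner product can be sketched in plain NumPy: a boundary "environment" matrix is grown one site at a time, so no tensor larger than $\chi \times d \times \chi$ ever appears. Illustrative only, independent of Aleph's API; site layout `(bond_left, phys, bond_right)` is an assumption:

```python
import numpy as np

# Compute <phi|psi> by sweeping an environment matrix through the chain.
# Each step costs O(d * chi^3), for O(N * d * chi^3) total.

rng = np.random.default_rng(3)
N, d, chi = 6, 2, 4
dims = [1] + [chi] * (N - 1) + [1]
psi = [rng.normal(size=(dims[i], d, dims[i + 1])) for i in range(N)]
phi = [rng.normal(size=(dims[i], d, dims[i + 1])) for i in range(N)]

E = np.ones((1, 1))   # environment, indexed (phi_bond, psi_bond)
for A, B in zip(psi, phi):
    # absorb one ket tensor: (phi_bond, d, psi_bond_right)
    E = np.tensordot(E, A, axes=([1], [0]))
    # absorb the conjugated bra tensor: back to (phi_bond, psi_bond)
    E = np.tensordot(B.conj(), E, axes=([0, 1], [0, 1]))

overlap = E.item()    # trivial boundary bonds leave a scalar
print(overlap)
```

The same environment trick underlies expectation values $\langle \psi | \hat{O} | \psi \rangle$, with an extra MPO layer in the middle of each step.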
Why Linear Tensor Networks Matter
1D Quantum Systems
Linear tensor networks are the natural choice for 1D quantum systems because:
- The entanglement in ground states of local Hamiltonians is limited (area law)
- Bounded bond dimension can capture the physics exactly or with controlled approximation
- Algorithms like DMRG can efficiently optimize the tensor structure
Dimensional Reduction
For high-dimensional tensors (e.g., the $2^{100}$-dimensional state vector of a 100-qubit system), direct storage is impossible. MPS provides a compressed representation that:
- Reduces memory from exponential to polynomial in system size
- Enables algorithms that would otherwise be intractable
- Maintains mathematical structure for efficient computation
Variational Algorithms
The MPS structure enables powerful variational algorithms:
- DMRG (Density Matrix Renormalization Group): sweeps through sites, optimizing tensors
- Variational quantum algorithms: use MPS as an ansatz for classical simulation
- Time evolution: algorithms for simulating quantum dynamics
Key Properties and Operations
Orthogonality Center
The orthogonality center position is crucial for algorithms:
var mps = make_random_mps(10, 2, 16, as_complex)
// Get current center
var center = mps.get_orthogonality_center()
// Move center to a new position (efficiently via orthogonalization)
mps.set_orthogonality_center(7)
// After sweeps, center might shift
mps.adjust_orthogonality_center(5)
// Check if still canonical
if (!mps.is_canonical()) {
mps.set_orthogonality_center(0)
}
Normalization
var mps = make_random_mps(10, 2, 16, as_complex)
// Normalize in place
mps.normalize()
// Get a normalized copy without modifying original
var mps_normalized = mps.normalized()
Applications in Aleph
- Ground state calculation with DMRG
- State overlap and fidelity
See Also
- Tensor: Introduction to tensors and their representation
- Factorization: Decomposition techniques like SVD
- Contraction: Combining tensors via contractions
- DMRG: The Density Matrix Renormalization Group algorithm